NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension
One-shot neural architecture search (NAS) substantially improves the search
efficiency by training one supernet to estimate the performance of every
possible child architecture (i.e., subnet). However, the inconsistency of
characteristics among subnets incurs serious interference in the optimization,
resulting in poor performance ranking correlation of subnets. Subsequent
explorations decompose supernet weights via a particular criterion, e.g.,
gradient matching, to reduce the interference; yet they suffer from huge
computational cost and low space separability. In this work, we propose a
lightweight and effective local intrinsic dimension (LID)-based method NAS-LID.
NAS-LID evaluates the geometrical properties of architectures by calculating
the low-cost LID features layer-by-layer, and the similarity characterized by
LID enjoys better separability compared with gradients, which thus effectively
reduces the interference among subnets. Extensive experiments on NASBench-201
indicate that NAS-LID achieves superior performance with better efficiency.
Specifically, compared to the gradient-driven method, NAS-LID can save up to
86% of GPU memory overhead when searching on NASBench-201. We also demonstrate
the effectiveness of NAS-LID on ProxylessNAS and OFA spaces. Source code:
https://github.com/marsggbo/NAS-LID
Comment: Accepted by AAAI2023, AutoML, NA
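The abstract describes computing low-cost LID features layer by layer. As a rough illustration of what such a feature could look like, here is a minimal sketch of the standard maximum-likelihood LID estimator applied to one layer's activations; the function name, batch shape, and neighborhood size k are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def lid_mle(activations: np.ndarray, k: int = 20) -> np.ndarray:
    """Maximum-likelihood LID estimate for each sample in a layer's activations.

    activations: (n_samples, n_features) outputs of one layer for a batch.
    Returns an (n_samples,) array of per-sample LID estimates; their mean can
    serve as a per-layer feature.
    """
    # Pairwise Euclidean distances between all samples in the batch.
    diffs = activations[:, None, :] - activations[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Distances to the k nearest neighbors (index 0 is the sample itself).
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    # MLE estimator: LID(x) = -( (1/k) * sum_i log(r_i / r_k) )^(-1)
    eps = 1e-12  # numerical guard against log(0) / division by zero
    log_ratios = np.log(knn / (knn[:, -1:] + eps) + eps)
    return -1.0 / (log_ratios.mean(axis=1) - eps)

# Example: layer-wise LID feature for a candidate subnet's activations.
layer_out = np.random.randn(128, 256)   # 128 samples, 256-dim layer output
print(lid_mle(layer_out, k=20).mean())
```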
Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Distributed deep learning has become a common way to reduce the overall training
time by exploiting multiple computing devices (e.g., GPUs/TPUs) as the size of
deep models and data sets increases. However, data communication between
computing devices could be a potential bottleneck to limit the system
scalability. How to address the communication problem in distributed deep learning has recently become a hot research topic. In this paper, we provide a
comprehensive survey of the communication-efficient distributed training
algorithms, covering both system-level and algorithmic-level optimizations. At the system level, we demystify the system designs and implementations that reduce the communication cost. At the algorithmic level, we compare different algorithms with
theoretical convergence bounds and communication complexity. Specifically, we
first propose a taxonomy of data-parallel distributed training algorithms,
which contains four main dimensions: communication synchronization, system
architectures, compression techniques, and parallelism of communication and
computing. Then we discuss the studies that address the problems along these four dimensions and compare their communication costs. We further compare the
convergence rates of different algorithms, which shows how fast each algorithm converges to the solution in terms of iterations. Based on the system-level communication cost analysis and the theoretical convergence speed comparison, we help readers understand which algorithms are more efficient under specific distributed environments, and we extrapolate potential directions for further optimization.
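One dimension of this taxonomy is compression. As a concrete, generic example (not code from the survey), the sketch below shows top-k gradient sparsification with local error feedback, a widely used way to cut the communication volume per iteration; the helper names are invented for this sketch.

```python
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries.

    Returns (values, indices, shape): the sparse payload a worker would send.
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape

def topk_decompress(values, indices, shape):
    """Rebuild a dense gradient from the sparse payload on the receiver side."""
    flat = torch.zeros(shape, dtype=values.dtype).flatten()
    flat[indices] = values
    return flat.view(shape)

# Error feedback: entries that were not transmitted are accumulated locally
# and added to the next step's gradient, which preserves convergence.
grad = torch.randn(1024, 1024)
residual = torch.zeros_like(grad)

corrected = grad + residual
values, indices, shape = topk_compress(corrected, ratio=0.01)
residual = corrected - topk_decompress(values, indices, shape)
# `values` and `indices` (~1% of the entries) are what gets communicated.
```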
Reliable and Efficient In-Memory Fault Tolerance of Large Language Model Pretraining
Extensive system scales (i.e., thousands of GPUs/TPUs) and prolonged training periods (i.e., months of pretraining) significantly escalate the probability of
failures when training large language models (LLMs). Thus, efficient and
reliable fault-tolerance methods are in urgent need. Checkpointing is the
primary fault-tolerance method to periodically save parameter snapshots from
GPU memory to disks via CPU memory. In this paper, we identify that the frequency of existing checkpoint-based fault tolerance is significantly limited by storage I/O overheads, which results in hefty re-training costs when restarting from the nearest checkpoint. In response to this gap, we introduce an in-memory
fault-tolerance framework for large-scale LLM pretraining. The framework boosts
the efficiency and reliability of fault tolerance from three aspects: (1)
Reduced Data Transfer and I/O: By asynchronously caching parameters, i.e.,
sharded model parameters, optimizer states, and RNG states, to CPU volatile
memory, our framework significantly reduces communication costs and bypasses
checkpoint I/O. (2) Enhanced System Reliability: Our framework enhances
parameter protection with a two-layer hierarchy: snapshot management processes
(SMPs) safeguard against software failures, together with Erasure Coding (EC)
protecting against node failures. This double-layered protection greatly
improves the survival probability of the parameters compared to existing
checkpointing methods. (3) Improved Snapshotting Frequency: Our framework
achieves more frequent snapshotting compared with asynchronous checkpointing
optimizations under the same saving time budget, which improves the fault
tolerance efficiency. Empirical results demonstrate that our framework
minimizes the overhead of fault tolerance of LLM pretraining by effectively
leveraging redundant CPU resources.
Comment: Fault Tolerance, Checkpoint Optimization, Large Language Model, 3D parallelism
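To make aspect (1) concrete, here is a minimal sketch of asynchronously copying sharded GPU state into pre-allocated pinned CPU buffers on a side CUDA stream, so snapshotting overlaps with training. The class and method names are assumptions for this sketch; it assumes a CUDA device and omits the paper's two-layer protection (snapshot management processes and erasure coding).

```python
import torch

class InMemorySnapshotter:
    """Asynchronously mirror a shard of training state into pinned CPU memory."""

    def __init__(self, named_params):
        # Pre-allocate pinned CPU buffers so device-to-host copies can be async.
        self.cpu_buffers = {
            name: torch.empty_like(p, device="cpu").pin_memory()
            for name, p in named_params
        }
        self.stream = torch.cuda.Stream()  # side stream; training is not blocked

    def snapshot(self, named_params):
        """Launch non-blocking GPU-to-CPU copies of the current parameter shard."""
        with torch.cuda.stream(self.stream):
            for name, p in named_params:
                self.cpu_buffers[name].copy_(p.detach(), non_blocking=True)

    def wait(self):
        """Make sure the latest snapshot is complete before using it for recovery."""
        self.stream.synchronize()

# Illustrative use inside a training loop:
#   snapshotter = InMemorySnapshotter(model.named_parameters())
#   every N steps:  snapshotter.snapshot(model.named_parameters())
#   on failure:     snapshotter.wait(); restore weights from snapshotter.cpu_buffers
```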
FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs
The rapid growth of memory and computation requirements of large language
models (LLMs) has outpaced the development of hardware, hindering people who
lack large-scale high-end GPUs from training or deploying LLMs. However,
consumer-level GPUs, which constitute a larger market share, are typically
overlooked for LLM training and deployment due to their weaker computing performance, smaller storage
capacity, and lower communication bandwidth. Additionally, users may have
privacy concerns when interacting with remote LLMs. In this paper, we envision
a decentralized system that unlocks the vast untapped potential of consumer-level GPUs for pre-training, inference, and fine-tuning of LLMs with privacy
protection. However, this system faces critical challenges, including limited
CPU and GPU memory, low network bandwidth, peer variability, and device heterogeneity. To address these challenges, our system design incorporates: 1)
a broker with a backup pool to support the dynamic joining and quitting of computing providers; 2) hardware-performance-aware task scheduling to improve system
efficiency; 3) abstracting ML procedures into directed acyclic graphs (DAGs) to
achieve model and task universality; 4) abstracting the intermediate representation and execution planes to ensure compatibility across various devices and deep
learning (DL) frameworks. Our performance analysis demonstrates that 50 RTX
3080 GPUs can achieve throughputs comparable to those of 4 H100 GPUs, which are
significantly more expensive.
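As an illustration of point 3, the sketch below represents ML procedures as a DAG of tasks and executes them in topological order, so any device can run a task as long as it implements the task's callable; the `Task` class, field names, and three-stage example are hypothetical, not the FusionAI API.

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter  # standard library, Python 3.9+
from typing import Callable, Dict, List

@dataclass
class Task:
    """One node of the DAG, e.g. a pipeline stage behind a framework-agnostic interface."""
    name: str
    run: Callable[[dict], dict]                 # device/framework-specific kernel
    deps: List[str] = field(default_factory=list)

def execute_dag(tasks: Dict[str, Task], inputs: dict) -> dict:
    """Run tasks in dependency order; each task sees the merged outputs of its deps."""
    order = TopologicalSorter({t.name: t.deps for t in tasks.values()}).static_order()
    results = {}
    for name in order:
        task = tasks[name]
        ctx = dict(inputs)
        for dep in task.deps:
            ctx.update(results[dep])
        results[name] = task.run(ctx)
    return results

# Illustrative three-stage pipeline: tokenize -> forward shard -> aggregate.
tasks = {
    "tokenize":  Task("tokenize",  lambda c: {"tokens": c["text"].split()}),
    "forward":   Task("forward",   lambda c: {"n": len(c["tokens"])}, deps=["tokenize"]),
    "aggregate": Task("aggregate", lambda c: {"result": c["n"]},      deps=["forward"]),
}
print(execute_dag(tasks, {"text": "decentralized llm training"})["aggregate"])
```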